KMID : 1134220150350010044
Hanyang Medical Reviews
2015 Volume.35 No. 1 p.44 ~ p.49
Measurement of Inter-Rater Reliability in Systematic Review
Park Chang-Un
Kim Hyun-Jung
Abstract
Inter-rater reliability refers to the degree of agreement when a measurement is repeated under identical conditions by different raters. In a systematic review, it can be used to evaluate agreement between authors in the process of extracting data. While a variety of methods exist for measuring inter-rater reliability, percent agreement and Cohen's kappa are the ones most commonly used for categorical data. Percent agreement is the proportion of actually observed agreement. Although its calculation is simple, it has the limitation that agreement achieved by chance between raters is not accounted for. Cohen's kappa is more robust than percent agreement because it adjusts the observed agreement for the agreement expected by chance. However, the interpretation of kappa can be misleading because it is sensitive to the distribution of the data. It is therefore desirable to report both percent agreement and kappa in the review. If the value of kappa is very low despite high observed agreement, alternative statistics can be pursued.
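The two statistics discussed in the abstract can be computed directly from the raters' labels. The following Python sketch is not part of the original article and uses hypothetical data for two raters screening the same ten records; it illustrates percent agreement and the chance-corrected Cohen's kappa.

# Minimal sketch (hypothetical data, not from the article):
# percent agreement and Cohen's kappa for two raters labeling
# the same records as "include" or "exclude".
from collections import Counter

rater_a = ["include", "include", "exclude", "exclude", "include",
           "exclude", "exclude", "include", "exclude", "exclude"]
rater_b = ["include", "exclude", "exclude", "exclude", "include",
           "exclude", "include", "include", "exclude", "exclude"]
n = len(rater_a)

# Percent agreement: proportion of records given the same label by both raters.
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance-expected agreement: for each category, multiply the two raters'
# marginal proportions, then sum over categories.
counts_a = Counter(rater_a)
counts_b = Counter(rater_b)
p_expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                 for c in set(counts_a) | set(counts_b))

# Cohen's kappa: observed agreement adjusted for chance agreement.
kappa = (p_observed - p_expected) / (1 - p_expected)

print(f"Percent agreement: {p_observed:.2f}")  # 0.80 for this data
print(f"Cohen's kappa:     {kappa:.2f}")       # 0.58 for this data

As the abstract notes, the two values can diverge: with very skewed marginal distributions, observed agreement can remain high while kappa drops sharply, which is why reporting both is recommended.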
KEYWORD
Agreement, Inter-Rater, Kappa, Rater, Reliability